Project Euphonia
For people who stutter, the convenience of voice assistant technology remains out of reach
Do you ever feel as if your voice assistants – whether Siri, Alexa, or Google – don't understand you? You might repeat your question a little slower, a little louder, until eventually the information you asked for is read back to you in the pleasing but lifeless tones of your voice-activated assistant. That's the reality facing many of the 3 million people in the United States who stutter, along with thousands of others whose speech is impaired in other ways, and many feel left out. "When this stuff first started coming out, I was all over it," said Jacquelyn Joyce Revere, a screenwriter from Los Angeles who stutters. "In LA, I need GPS all the time, so this seemed like a more convenient way to live the life I want to live."
- North America > United States > California > Los Angeles County > Los Angeles (0.25)
- North America > United States > Washington > King County > Kent (0.05)
- North America > United States > California > Sacramento County > Elk Grove (0.05)
DeepMind and Google recreate former NFL linebacker Tim Shaw's voice using AI
In August, Google AI researchers working with the ALS Therapy Development Institute shared details about Project Euphonia, a speech-to-text transcription service for people with speaking impairments. They showed that, using data sets of audio from both native and non-native English speakers with neurodegenerative diseases and techniques from Parrotron, an AI tool for people with speech impediments, they could drastically improve the quality of speech synthesis and generation. Recently, in something of a case study, Google researchers and a team from Alphabet's DeepMind employed Euphonia in an effort to recreate the original voice of Tim Shaw, a former NFL linebacker who played for the Carolina Panthers, Jacksonville Jaguars, Chicago Bears, and Tennessee Titans before retiring in 2013. Roughly six years ago, Shaw was diagnosed with ALS, which has left him reliant on a wheelchair and unable to speak, swallow, or breathe without assistance. Over the course of six months, the joint research team adapted a generative AI model -- WaveNet -- to the task of synthesizing speech from samples of Shaw's voice recorded prior to his ALS diagnosis.
- North America > United States > Tennessee (0.25)
- North America > United States > Illinois > Cook County > Chicago (0.25)
- Europe > United Kingdom (0.05)
- Asia > India (0.05)
- Leisure & Entertainment > Sports > Football (1.00)
- Health & Medicine > Therapeutic Area > Neurology > Amyotrophic Lateral Sclerosis (ALS) (0.36)
Google devises conversational AI that works better for people with ALS and accents
Google AI researchers working with the ALS Therapy Development Institute today shared details about Project Euphonia, a speech-to-text transcription service for people with speaking impairments. They say their approach can also improve automatic speech recognition for people with non-native English accents. People with amyotrophic lateral sclerosis (ALS) often have slurred speech, but existing AI systems are typically trained on voice data from speakers without speech impairments or strong accents. The new approach succeeds primarily through the introduction of small amounts of data representing people with accents and ALS. "We show that 71% of the improvement comes from only 5 minutes of training data," according to a paper published on arXiv July 31 titled "Personalizing ASR for Dysarthric and Accented Speech with Limited Data."
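The core idea described above – personalizing a pretrained recognizer by fine-tuning a small part of it on a few minutes of a specific speaker's data – can be illustrated with a toy numerical sketch. This is not Google's actual Euphonia pipeline; it is a minimal stand-in where a frozen linear "acoustic model" (hypothetical) is adapted to a speaker whose features are systematically shifted, by learning only a small input-bias correction from five samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" weights of a toy linear acoustic model y = W x,
# standing in for a full ASR network trained on typical speech.
W_pretrained = np.array([[1.0, 0.5],
                         [0.2, 1.0]])

# Simulate a speaker whose speech features are systematically shifted
# (a crude stand-in for dysarthric or accented speech), plus a handful
# of personalization examples (the "few minutes of data").
shift = np.array([0.3, -0.2])
X_personal = rng.normal(size=(5, 2))
Y_personal = X_personal @ W_pretrained.T   # targets for unshifted speech
X_observed = X_personal + shift            # what the model actually hears

def mse(b, X, Y):
    """Mean squared error of the frozen model with bias correction b."""
    pred = (X + b) @ W_pretrained.T
    return float(np.mean((pred - Y) ** 2))

# Personalization step: freeze W, learn only the small correction b
# by gradient descent on the few speaker-specific samples.
b = np.zeros(2)
lr = 0.1
for _ in range(200):
    pred = (X_observed + b) @ W_pretrained.T
    grad = 2 * ((pred - Y_personal) @ W_pretrained).mean(axis=0)
    b -= lr * grad

err_before = mse(np.zeros(2), X_observed, Y_personal)
err_after = mse(b, X_observed, Y_personal)
print(f"error before personalization: {err_before:.4f}")
print(f"error after  personalization: {err_after:.6f}")
```

The design choice mirrors the paper's finding in spirit: because most of the model stays frozen, very little speaker data is needed to learn the small corrective component, and the error on that speaker drops sharply.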